27 research outputs found

    Competitive Parallel Disk Prefetching and Buffer Management

    Get PDF
    We provide a competitive analysis framework for online prefetching and buffer management algorithms in parallel I/O systems, using a read-once model of block references. This has widespread applicability to key I/O-bound applications such as external merging and concurrent playback of multiple video streams. Two realistic lookahead models, global lookahead and local lookahead, are defined. Algorithms NOM and GREED based on these two forms of lookahead are analyzed for shared buffer and distributed buffer configurations, both of which occur frequently in existing systems. An important aspect of our work is that we show how to implement both models of lookahead in practice using the simple techniques of forecasting and flushing. Given a D-disk parallel I/O system and a globally shared I/O buffer that can hold up to M disk blocks, we derive a lower bound of Ω(√D) on the competitive ratio of any deterministic online prefetching algorithm with O(M) lookahead. NOM is shown to match the lower bound using global M-block lookahead. In contrast, using only local lookahead results in an Ω(D) competitive ratio. When the buffer is distributed into D portions of M/D blocks each, the algorithm GREED based on local lookahead is shown to be optimal, and NOM is within a constant factor of optimal. Thus we provide a theoretical basis for the intuition that global lookahead is more valuable for prefetching in the case of a shared buffer configuration, whereas local lookahead suffices in the distributed configuration. Finally, we analyze the performance of these algorithms for reference strings generated by a uniformly-random stochastic process and show that they achieve the minimal expected number of I/Os. These results also give bounds on the worst-case expected performance of algorithms that employ randomization in the data layout.
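
    To make the local-lookahead idea behind GREED concrete, here is a minimal sketch (my own simplification, not the authors' implementation: buffer capacity and the forecasting/flushing machinery are ignored, and all names are invented). In each parallel I/O step, every disk independently fetches the block it holds that the reference string needs earliest.

        /* Sketch of GREED-style local-lookahead prefetching.  Each parallel
         * I/O step, every disk fetches the block it stores that is needed
         * earliest in the not-yet-consumed reference string.  Buffer limits
         * are ignored and all names are invented for illustration. */
        #include <stdio.h>

        #define D      3    /* number of disks                 */
        #define NREFS 12    /* length of the reference string  */

        int main(void) {
            /* disk_of[i]: disk holding the i-th referenced block (read-once) */
            int disk_of[NREFS] = {0, 1, 2, 0, 0, 1, 2, 2, 1, 0, 2, 1};
            int fetched[NREFS] = {0};
            int consumed = 0, steps = 0;

            while (consumed < NREFS) {
                /* one parallel I/O step: each disk serves its earliest request */
                for (int d = 0; d < D; d++)
                    for (int i = consumed; i < NREFS; i++)
                        if (!fetched[i] && disk_of[i] == d) { fetched[i] = 1; break; }
                steps++;
                /* the application consumes the fetched prefix in reference order */
                while (consumed < NREFS && fetched[consumed]) consumed++;
            }
            printf("%d references served in %d parallel I/O steps\n", NREFS, steps);
            return 0;
        }

    With the layout above, the twelve references finish in four parallel steps rather than twelve, which is the parallelism a purely demand-driven reader would forfeit.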

    Hail: A language for easy and correct device access

    Get PDF
    It is difficult to write device drivers. One factor is that writing low-level code for accessing devices and manipulating their registers is tedious and error-prone. For many system-on-chip based systems, buggy hardware, imprecise documentation, and code reuse worsen the situation further. This paper presents HAIL (Hardware Access Interface Language), a language-based approach to simplifying device access programming and generating error-checking code against bugs in software, hardware, and documentation. HAIL is a domain-specific language that specifies all aspects of a device's programming interface and the access methods in a particular system and OS. A compiler automatically checks the specification and translates it into C code for device access, with optional debugging code. The generated code can be included directly in device driver code. In the paper, we argue that HAIL lowers development effort, incurs minimal runtime overhead, and reduces device-access-related bugs. We also show that the HAIL specification can be reused across different operating systems, thereby reducing porting costs.
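
    As a hedged illustration of the kind of checked accessor code a HAIL-like compiler might emit (the device, the register layout, and every name below are hypothetical; the paper does not publish this code, and a plain variable stands in for the memory-mapped register so the sketch runs on a host):

        /* Illustration of a generated-style checked register accessor.
         * Everything here (device, field layout, names) is hypothetical;
         * this is not HAIL's actual output. */
        #include <assert.h>
        #include <stdint.h>
        #include <stdio.h>

        static volatile uint32_t uart_baud_reg;   /* stand-in for an MMIO register */
        #define UART_BAUD uart_baud_reg

        /* Writes the 16-bit baud-divisor field and, in debug builds, checks
         * for the classes of bugs HAIL targets: a software bug (field
         * overflow) and a hardware/documentation bug (reserved bits set). */
        static void uart_set_baud_divisor(uint32_t div)
        {
            assert(div <= 0xFFFFu);               /* software bug: value too wide */
            uint32_t reg = UART_BAUD;
            assert((reg & 0xFFFF0000u) == 0u);    /* hw/doc bug: reserved bits set */
            UART_BAUD = (reg & 0xFFFF0000u) | div;
        }

        int main(void)
        {
            uart_set_baud_divisor(12u);           /* hypothetical divisor value */
            printf("UART_BAUD = 0x%08x\n", (unsigned)UART_BAUD);
            return 0;
        }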

    Competitive prefetching and buffer management for parallel I/O systems

    No full text
    In this thesis we study prefetching and buffer management algorithms for parallel I/O systems. Two models of lookahead, global and local, which give limited information regarding future accesses, are introduced. Two configurations of the I/O buffer, shared and distributed, are considered, based on the accessibility of the I/O buffer. The performance of prefetching algorithms using the two forms of lookahead is analyzed in the framework of competitive analysis, for read-once access patterns. Two algorithms, PHASE and GREED, which match the lower bounds, are presented. A randomized version of GREED that performs the minimal expected number of I/Os is designed and applied to the problems of external sorting and video retrieval. Finally, the problem of designing prefetching and buffer management algorithms for read-many reference strings is examined. An algorithm that uses randomized write-back to attain good expected I/O performance is presented.

    Prefetching and buffer management for parallel I/O systems

    No full text
    In parallel I/O systems the I/O buffer can be used to improve performance in several ways: by caching blocks to avoid repeated disk accesses for the same block, by buffering prefetched blocks to hide I/O latency, and by making the load on the disks more uniform. To make the best use of available parallelism and locality in I/O accesses, it is necessary to design prefetching and caching algorithms that schedule reads intelligently, so that the most useful blocks are prefetched into the buffer and the most valuable blocks are retained when the need for evictions arises. This dissertation focuses on algorithms for buffer management in parallel I/O systems. Our aim is to exploit the high parallelism provided by multiple disks to reduce the average read latency seen by an application. The thesis is that traditional greedy strategies fail to exploit I/O parallelism, necessitating new algorithms that make use of the available I/O resources. We show that buffer management in parallel I/O systems is fundamentally different from that in systems with a single disk, and we develop new algorithms that carefully decide which blocks to prefetch and when, together with which blocks to retain in the buffer. Our emphasis is on designing computationally simple algorithms that optimize the number of I/Os performed. We consider two classes of I/O access patterns, read-once and read-often, based on the frequency of accesses to the same data. For buffer management in both classes of accesses, we identify fundamental bounds on the performance of online algorithms, study the performance of intuitive strategies, and present randomized and deterministic algorithms that guarantee higher performance.
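
    A back-of-the-envelope illustration of why demand-driven (greedy) fetching forfeits parallelism, under two simplifying assumptions of mine (one block per reference, perfectly balanced placement):

        /* Demand fetching reads one block per I/O step; an ideal prefetcher
         * keeps all D disks busy and fetches D blocks per step.  Purely
         * illustrative arithmetic, not a result from the dissertation. */
        #include <stdio.h>

        int main(void) {
            int D = 8, N = 1024;                   /* disks; blocks, one per reference */
            int demand_steps   = N;                /* demand fetch: one block per step */
            int prefetch_steps = (N + D - 1) / D;  /* ideal: D blocks fetched per step */
            printf("demand: %d steps; parallel prefetch: >= %d steps\n",
                   demand_steps, prefetch_steps);
            return 0;
        }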

    USENIX Association Proceedings of FAST ’03:

    No full text
    Rights to individual papers remain with the author or the author's employer. Permission is granted for noncommercial reproduction of the work for educational or research purposes. This copyright notice must be included in the reproduced paper. USENIX acknowledges all trademarks herein. Plutus: Scalable secure file sharing on untrusted storage.

    Analysis of simple randomized buffer management for parallel I/O

    No full text
    Buffer management for a D-disk parallel I/O system is considered in the context of randomized placement of data on the disks. A simple prefetching and caching algorithm, PHASE-LRU, using bounded lookahead is described and analyzed. It is shown that PHASE-LRU performs an expected number of I/Os that is within a factor Θ(log D / log log D) of the number performed by an optimal off-line algorithm. In contrast, any deterministic buffer…
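
    The Θ(log D / log log D) factor echoes the classical balls-into-bins maximum load. A quick simulation of that intuition (illustrative only; the uniform placement, one block per reference, and one parallel I/O step per round are my assumptions, not the paper's analysis): placing D blocks on D disks at random, the busiest disk receives roughly log D / log log D blocks, which bounds how far a round can be from perfectly parallel.

        /* Balls-into-bins sketch: D blocks placed uniformly at random on D
         * disks; the max disk load is what a parallel I/O step waits on. */
        #include <stdio.h>
        #include <stdlib.h>

        enum { D = 1024, TRIALS = 100 };   /* disks == blocks per round; trials */

        int main(void) {
            srand(42);
            double avg_max = 0.0;
            for (int t = 0; t < TRIALS; t++) {
                int load[D] = {0}, max = 0;
                for (int b = 0; b < D; b++) {   /* place block on a random disk */
                    int disk = rand() % D;
                    if (++load[disk] > max) max = load[disk];
                }
                avg_max += max;
            }
            printf("avg max disk load: %.2f blocks (mean load = 1)\n",
                   avg_max / TRIALS);
            return 0;
        }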

    ASP: Adaptive Online Parallel Disk Scheduling

    No full text
    In this work we address the problems of prefetching and I/O scheduling for read-once reference strings in a parallel I/O system. We use the standard parallel disk model with D disks and a shared I/O buffer of size M. We design an on-line algorithm ASP (Adaptive Segmented Prefetching) with ML-block lookahead, L ≥ 1, and compare its performance to the best on-line algorithm with the same lookahead. We show that for any reference string the number of I/Os done by ASP is within a factor Θ(C), C = min{…